Regret Bounds for Online Learning with Nonconvex Quadratic Losses over Reproducing Kernel Hilbert Spaces

Author(s): not recorded
Abstract

We present several online algorithms with dimension-free regret bounds for general nonconvex quadratic losses by viewing them as functions in Reproducing Kernel Hilbert Spaces (RKHS). We adapt the Online Gradient Descent, Follow the Regularized Leader, and Conditional Gradient meta-algorithms to the RKHS setting and provide regret bounds there. By analyzing these methods as algorithms for losses over an RKHS, we obtain dimension-free regret bounds for potentially nonconvex losses, including quadratic losses. We apply our framework to online eigenvector decomposition, extending it to losses with a linear term, and we also analyze other nonconvex kernel losses and derive regret bounds for them.
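
To make the first of these adaptations concrete, below is a minimal sketch of online gradient descent run directly in an RKHS, assuming a Gaussian kernel, the squared pointwise loss, and a constant step size; the class name, kernel choice, and parameter values are illustrative assumptions, not details taken from the paper. The key point is that, by the reproducing property f(x) = ⟨f, k(x, ·)⟩, the gradient of a pointwise loss is a multiple of the kernel section k(x_t, ·), so the iterate remains a finite kernel expansion over the observed points.

```python
import numpy as np

def gaussian_kernel(x, z, gamma=1.0):
    """RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return np.exp(-gamma * np.sum((x - z) ** 2))

class KernelOGD:
    """Online gradient descent in an RKHS for the squared loss
    l_t(f) = (f(x_t) - y_t)^2 (an illustrative choice of loss).

    Since grad l_t(f) = 2 * (f(x_t) - y_t) * k(x_t, .), each step
    appends one weighted kernel section, so f stays a finite
    expansion f = sum_s alpha_s k(x_s, .) over the points seen.
    """

    def __init__(self, kernel=gaussian_kernel, eta=0.1):
        self.kernel = kernel
        self.eta = eta      # constant step size; schedules like 1/sqrt(t) also work
        self.points = []    # expansion centers x_s
        self.coefs = []     # expansion coefficients alpha_s

    def predict(self, x):
        """Evaluate f(x) = sum_s alpha_s * k(x_s, x)."""
        return sum(a * self.kernel(xs, x)
                   for a, xs in zip(self.coefs, self.points))

    def step(self, x_t, y_t):
        """Suffer the loss at (x_t, y_t), then take one gradient step."""
        residual = self.predict(x_t) - y_t
        self.points.append(x_t)
        self.coefs.append(-2.0 * self.eta * residual)
        return residual ** 2  # the loss incurred this round
```

Feeding the learner a stream of (x_t, y_t) pairs and calling step on each accumulates the kernel expansion; in practice the expansion is usually pruned or projected to keep the number of stored centers bounded.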

Similar articles

Classification with non-i.i.d. sampling

Keywords: β-mixing sequence; reproducing kernel Hilbert spaces; ℓ²-empirical covering number; capacity-dependent error bounds.

We study learning algorithms for classification generated by regularization schemes in reproducing kernel Hilbert spaces associated with a general convex loss function in a non-i.i.d. process. Error analysis is carried out, and our main purpose is to provide an elaborate ...

ZigZag: A New Approach to Adaptive Online Learning

We develop a new family of algorithms for the online learning setting with regret against any data sequence bounded by the empirical Rademacher complexity of that sequence. To develop a general theory of when this type of adaptive regret bound is achievable, we establish a connection to the theory of decoupling inequalities for martingales in Banach spaces. When the hypothesis class is a set of ...

Nonparametric Contextual Bandit Optimization via Random Approximation

We examine the stochastic contextual bandit problem in a novel continuous-action setting where the policy lies in a reproducing kernel Hilbert space (RKHS). This provides a framework to handle continuous policy and action spaces in a tractable manner while retaining polynomial regret bounds, in contrast with much prior work in the continuous setting. We extend an optimization perspective that h...

Error analysis for online gradient descent algorithms in reproducing kernel Hilbert spaces

We consider online gradient descent algorithms with general convex loss functions in reproducing kernel Hilbert spaces (RKHS). These algorithms offer an advantageous way of learning from large training sets. We provide general conditions ensuring convergence of the algorithm in the RKHS norm. Explicit generalization error rates for q-norm ε-insensitive regression loss are given by choosing the...
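
For context, the update analyzed in this line of work is the classical regularized online gradient step in an RKHS, f_{t+1} = (1 - η_t λ) f_t - η_t g_t k(x_t, ·), where g_t is a (sub)gradient of the pointwise loss at f_t(x_t). The sketch below illustrates that standard scheme; the RBF kernel, the 1/√t step-size schedule, and all names are assumptions for illustration, not code from the cited paper.

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    """RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    return np.exp(-gamma * np.sum((x - z) ** 2))

def regularized_kernel_ogd(stream, loss_grad, lam=0.01, kernel=rbf):
    """Sketch of the classical regularized online update in an RKHS:

        f_{t+1} = (1 - eta_t * lam) * f_t - eta_t * g_t * k(x_t, .)

    where g_t is a (sub)gradient of the pointwise loss at f_t(x_t).
    The regularizer (lam/2) * ||f||^2 shrinks every stored coefficient
    multiplicatively; the data term appends one kernel section per round.
    """
    points, coefs = [], []
    for t, (x_t, y_t) in enumerate(stream, start=1):
        f_xt = sum(a * kernel(xs, x_t) for a, xs in zip(coefs, points))
        g_t = loss_grad(f_xt, y_t)   # e.g. 2 * (f_xt - y_t) for squared loss
        eta_t = 1.0 / np.sqrt(t)     # illustrative step-size schedule
        coefs = [(1.0 - eta_t * lam) * a for a in coefs]
        points.append(x_t)
        coefs.append(-eta_t * g_t)
        yield f_xt                   # prediction made before the update
```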

Some Properties of Reproducing Kernel Banach and Hilbert Spaces

This paper is devoted to the study of reproducing kernel Hilbert spaces. We focus on multipliers of reproducing kernel Banach and Hilbert spaces. In particular, we try to extend this concept and prove some related theorems. Moreover, we focus on reproducing kernels in vector-valued reproducing kernel Hilbert spaces. In particular, we extend reproducing kernels to relative reproducing kernels an...

Journal title: -

Volume: -   Issue: -

Pages: -

Publication date: 2017